In this notebook we practice the classification algorithms we have learned in this course.
We load a dataset with the Pandas library, apply the following algorithms, and use accuracy evaluation metrics to find the best one for this specific dataset.
Let's first load required libraries:
import itertools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter
import pandas as pd
import matplotlib.ticker as ticker
from sklearn import preprocessing
%matplotlib inline
This dataset is about past loans. The loan_train.csv data set includes details of 346 customers whose loans are already paid off or defaulted. It includes the following fields:
Field | Description |
---|---|
Loan_status | Whether a loan is paid off or in collection |
Principal | Basic principal loan amount at origination |
Terms | Origination terms, which can be a weekly (7 days), biweekly, or monthly payoff schedule |
Effective_date | When the loan was originated and took effect |
Due_date | Since it is a one-time payoff schedule, each loan has one single due date |
Age | Age of applicant |
Education | Education of applicant |
Gender | Gender of applicant |
Let's download the dataset:
#!wget -O loan_train.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/FinalModule_Coursera/data/loan_train.csv
df = pd.read_csv('loan_train.csv')
df.head()
df.shape
df['due_date'] = pd.to_datetime(df['due_date'])
df['effective_date'] = pd.to_datetime(df['effective_date'])
df.head()
Let's see how many of each class are in our data set:
df['loan_status'].value_counts()
260 people have paid off their loans on time, while 86 have gone into collection.
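Since the classes are imbalanced (roughly 3:1), plain accuracy can be a misleading summary later on. As an optional sketch (not part of the original lab; it only reuses the df loaded above), we can look at the class proportions:
# Relative frequency of each class
print(df['loan_status'].value_counts(normalize=True))
# A quick bar chart of the class balance
df['loan_status'].value_counts().plot(kind='bar', title='Class balance')
plt.show()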
Let's plot some columns to understand the data better:
# notice: installing seaborn might take a few minutes
#!conda install -c anaconda seaborn -y
import seaborn as sns
bins = np.linspace(df.Principal.min(), df.Principal.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'Principal', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
bins = np.linspace(df.age.min(), df.age.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'age', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
df['dayofweek'] = df['effective_date'].dt.dayofweek
bins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)
g = sns.FacetGrid(df, col="Gender", hue="loan_status", palette="Set1", col_wrap=2)
g.map(plt.hist, 'dayofweek', bins=bins, ec="k")
g.axes[-1].legend()
plt.show()
We see that people who get the loan at the end of the week tend not to pay it off, so let's use feature binarization with a threshold at day 4: loans whose effective date falls on day 4 of the week or later (Friday through Sunday) are flagged as weekend.
df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
df.head()
Let's look at gender:
df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)
86% of females pay off their loans, while only 73% of males do.
Let's convert male to 0 and female to 1:
df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
df.head()
df.groupby(['education'])['loan_status'].value_counts(normalize=True)
df[['Principal','terms','age','Gender','education']].head()
Feature = df[['Principal','terms','age','Gender','weekend']]
Feature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)
Feature.drop(['Master or Above'], axis = 1,inplace=True)
Feature.head()
Let's define feature sets, X:
X = Feature
X[0:5]
What are our labels?
y = df['loan_status'].values
y[0:5]
Data standardization gives the data zero mean and unit variance (technically this should be done after the train/test split):
X= preprocessing.StandardScaler().fit(X).transform(X)
X[0:5]
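As noted above, fitting the scaler on the whole feature matrix before splitting lets information from the future test fold leak into the training statistics. A minimal sketch of the leak-free variant, assuming the Feature frame and y defined above (the cells below keep the original lab flow and do not use the X_tr_scaled/X_te_scaled names introduced here):
from sklearn.model_selection import train_test_split
# Split first, then fit the scaler on the training portion only
X_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)
scaler = preprocessing.StandardScaler().fit(X_tr)   # statistics come from training data only
X_tr_scaled = scaler.transform(X_tr)
X_te_scaled = scaler.transform(X_te)                # test data reuses the training statistics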
Now it is your turn: use the training set to build an accurate model, then use the test set to report the model's accuracy. You should use the following algorithms: K Nearest Neighbor (KNN), Decision Tree, Support Vector Machine, and Logistic Regression.
__Notice:__ You should find the best k to build the KNN model with the best accuracy.
__Warning:__ You should not use loan_test.csv for finding the best k; however, you can split your loan_train.csv into train and test sets to find the best k.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print('Shape of X training set {}'.format(X_train.shape),'&',' Size of Y training set {}'.format(y_train.shape))
Ks = 10
mean_acc = np.zeros((Ks-1))
std_acc = np.zeros((Ks-1))
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
for n in range(1,Ks):
    # Train Model and Predict
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat = neigh.predict(X_test)
    mean_acc[n-1] = metrics.accuracy_score(y_test, yhat)
    std_acc[n-1] = np.std(yhat==y_test)/np.sqrt(yhat.shape[0])
print(mean_acc)
print( "The best accuracy was with", mean_acc.max(), "with k=", mean_acc.argmax()+1)
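std_acc is computed above but never shown; a small optional sketch plots the mean accuracy against k with a one-standard-deviation band (it only reuses Ks, mean_acc, and std_acc from the cell above):
plt.plot(range(1, Ks), mean_acc, 'g')
plt.fill_between(range(1, Ks), mean_acc - 1 * std_acc, mean_acc + 1 * std_acc, alpha=0.10)
plt.legend(('Accuracy', '+/- 1 std'))
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors (k)')
plt.tight_layout()
plt.show()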
# Keep the fitted KNN model whose hold-out accuracy matches the best score found above
for n in range(1,Ks):
    neigh = KNeighborsClassifier(n_neighbors = n).fit(X_train,y_train)
    yhat = neigh.predict(X_test)
    if metrics.accuracy_score(y_test, yhat) == mean_acc.max():
        KNNmodel = neigh
        break
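Picking k from a single 80/20 split can be noisy. As an optional alternative (a sketch, not part of the original lab), k-fold cross-validation over the training split gives a more stable estimate:
from sklearn.model_selection import GridSearchCV
# Search k = 1..Ks-1 with 5-fold cross-validation on the training data only
param_grid = {'n_neighbors': list(range(1, Ks))}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)
print('Best k:', grid.best_params_['n_neighbors'], '| CV accuracy:', grid.best_score_)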
from sklearn import metrics
from sklearn.tree import DecisionTreeClassifier
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.6, random_state=2)
DSmodel = DecisionTreeClassifier(criterion="entropy", max_depth = 4)
DSmodel.fit(X_train,y_train)
predTree = DSmodel.predict(X_test)
print (predTree [0:5])
print (y_test [0:5])
print("DecisionTrees's Accuracy: ", metrics.accuracy_score(y_test, predTree))
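To see which features the tree actually splits on, scikit-learn's plot_tree can render the fitted model. A small optional sketch (it assumes the Feature frame defined earlier is still in memory for the column names):
from sklearn.tree import plot_tree
plt.figure(figsize=(12, 6))
plot_tree(DSmodel, feature_names=list(Feature.columns), class_names=list(DSmodel.classes_), filled=True)
plt.show()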
from sklearn import svm
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=4)
clf = svm.SVC(kernel='rbf', probability=True)
clf.fit(X_train, y_train)
predsvm = clf.predict(X_test)
print (predsvm [0:5])
print (y_test [0:5])
print("Support Vector Machine's Accuracy: ", metrics.accuracy_score(y_test, predsvm))
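The RBF kernel is only one choice. An optional sketch comparing the kernels SVC supports on the same split (accuracy only, for orientation; not part of the original lab):
for kernel in ('linear', 'poly', 'rbf', 'sigmoid'):
    svc = svm.SVC(kernel=kernel).fit(X_train, y_train)
    acc = metrics.accuracy_score(y_test, svc.predict(X_test))
    print(kernel, 'accuracy:', round(acc, 3))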
from sklearn.linear_model import LogisticRegression
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.3, random_state=6)
LR = LogisticRegression(C=0.01, solver='liblinear')
LR.fit(X_train,y_train)
predlr = LR.predict(X_test)
print (predlr [0:5])
print (y_test [0:5])
print("Logistic Regression's Accuracy: ", metrics.accuracy_score(y_test, predlr))
from sklearn.metrics import jaccard_score
from sklearn.metrics import f1_score
from sklearn.metrics import log_loss
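As a quick reminder of how these three metrics behave, here is a toy example with hand-made labels (the values are illustrative only and unrelated to the loan data):
y_true_toy = ['PAIDOFF', 'PAIDOFF', 'COLLECTION', 'PAIDOFF', 'COLLECTION']
y_pred_toy = ['PAIDOFF', 'COLLECTION', 'COLLECTION', 'PAIDOFF', 'PAIDOFF']
y_prob_toy = [[0.2, 0.8], [0.6, 0.4], [0.7, 0.3], [0.1, 0.9], [0.4, 0.6]]  # columns: P(COLLECTION), P(PAIDOFF)
print(jaccard_score(y_true_toy, y_pred_toy, pos_label='PAIDOFF'))  # intersection over union for the PAIDOFF class
print(f1_score(y_true_toy, y_pred_toy, pos_label='PAIDOFF'))       # harmonic mean of precision and recall
print(log_loss(y_true_toy, y_prob_toy))                            # penalizes confident but wrong probabilities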
First, download and load the test set:
#!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv
test_df = pd.read_csv('loan_test.csv')
test_df.head()
Models = {'KNN':KNNmodel, 'DecisionTree':DSmodel, 'SVM':clf, 'LogisticRegression':LR}
test_df['due_date'] = pd.to_datetime(test_df['due_date'])
test_df['effective_date'] = pd.to_datetime(test_df['effective_date'])
test_df['dayofweek'] = test_df['effective_date'].dt.dayofweek
test_df['weekend'] = test_df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)
test_df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)
test_Feature = test_df[['Principal','terms','age','Gender','weekend']]
test_Feature = pd.concat([test_Feature,pd.get_dummies(test_df['education'])], axis=1)
test_Feature.drop(['Master or Above'], axis = 1, inplace=True)
test_Feature.fillna(test_Feature.mean(), inplace=True)
X_test = preprocessing.StandardScaler().fit(test_Feature).transform(test_Feature)
y_test = test_df['loan_status'].values
test_df['loan_status'].value_counts()
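Note that the cell above fits a fresh StandardScaler on the test features. Strictly speaking, the scaler fitted on the training features should be reused so both sets share the same statistics. A minimal optional sketch of that variant (it assumes the training Feature frame is still in memory; X_test_aligned is an illustrative name and is not used by the cells below):
# Fit the scaler once on the training features, then apply the same statistics to the test features
scaler = preprocessing.StandardScaler().fit(Feature)
X_test_aligned = scaler.transform(test_Feature)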
Jaccards = []
for algorithm, model in Models.items():
    yhat = model.predict(X_test)
    Jaccards.append([
        jaccard_score(y_test, yhat, pos_label='COLLECTION'),
        jaccard_score(y_test, yhat, pos_label='PAIDOFF')
    ])
JaccardDict = dict(zip(Models.keys(), Jaccards))
print("Jaccard:")
pd.DataFrame(JaccardDict).transpose().iloc[:,[1]]
# F1-score
F1_scores = []
for algorithm, model in Models.items():
    yhat = model.predict(X_test)
    F1_scores.append(f1_score(y_test, yhat, average=None))
F1_scoresDict = dict(zip(Models.keys(), F1_scores))
print("F1-score:")
pd.DataFrame(F1_scoresDict).transpose().iloc[:,[1]]
# LogLoss
predlr_probas = LR.predict_proba(X_test)
log_loss(y_test, predlr_probas)
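To fill in the report table below programmatically, the per-class scores computed above can be collected into one frame. A sketch (it reuses JaccardDict, F1_scoresDict, y_test, and predlr_probas from the cells above, and takes index 1, the PAIDOFF class, to match the lab's convention):
report = pd.DataFrame({
    'Jaccard':  {name: scores[1] for name, scores in JaccardDict.items()},
    'F1-score': {name: scores[1] for name, scores in F1_scoresDict.items()},
    'LogLoss':  {'LogisticRegression': log_loss(y_test, predlr_probas)},
})
report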
You should be able to report the accuracy of the built model using different evaluation metrics:
Algorithm | Jaccard | F1-score | LogLoss |
---|---|---|---|
KNN | ? | ? | NA |
Decision Tree | ? | ? | NA |
SVM | ? | ? | NA |
LogisticRegression | ? | ? | ? |
IBM SPSS Modeler is a comprehensive analytics platform with many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, groups, and systems, and by your enterprise as a whole. A free trial is available through this course: SPSS Modeler
Also, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at Watson Studio
Saeed Aghabozorgi, PhD, is a Data Scientist at IBM with a track record of developing enterprise-level applications that substantially increase clients' ability to turn data into actionable knowledge. He is a researcher in the data mining field and an expert in developing advanced analytic methods such as machine learning and statistical modelling on large datasets.
Date (YYYY-MM-DD) | Version | Changed By | Change Description |
---|---|---|---|
2020-10-27 | 2.1 | Lakshmi Holla | Made changes in import statement due to updates in version of sklearn library |
2020-08-27 | 2.0 | Malika Singla | Added lab to GitLab |